Neighbor Finding Techniques for Images Represented by Quadtrees
Author
Abstract
Image representation plays an important role in image processing applications. Recently there has been considerable interest in the use of quadtrees. This has led to the development of algorithms for performing image processing tasks as well as for converting between the quadtree and other representations. Common to these algorithms is a traversal of the tree and the performance of a given computation at each node. These computations typically require the ability to examine adjacencies between neighboring nodes. Algorithms are given for determining such adjacencies in the horizontal, vertical, and diagonal directions. The execution times of the algorithms are analyzed using a suitably defined model.

Region representation is an important aspect of image processing, with numerous representations finding use. Recently, there has emerged a considerable amount of interest in the quadtree [3-8, 11]. This stems primarily from its hierarchical nature, which lends itself to a compact representation. It is also quite efficient for a number of traditional image processing operations such as computing perimeters [14], labeling connected components [13], finding the genus of an image [1], and computing centroids and set properties [18]. Development of algorithms to convert between the quadtree representation and other representations such as chain codes [2, 10], rasters [12, 17], binary arrays [11], and medial axis transforms [15, 16, 19] lends further support to this importance. In this paper we discuss methods for moving between adjacent blocks in the quadtree. We first show how transitions are made between blocks of equal size and then generalize our result to blocks of different size, where the destination block is either larger or smaller than the source block. Such blocks are termed neighbors. Note that the transitions we discuss include those along diagonal, as well as horizontal and vertical, directions. The importance of these methods lies in their being the cornerstone of many of the quadtree algorithms (e.g., [1, 2, 10, 12-19]), since these algorithms are basically tree traversals with a "visit" at each node. More often than not these visits involve probing a node's neighbors. The significance of our methods lies in the fact that they do not use coordinate information, knowledge of the size of the image, or storage in excess of that imposed by the nature of the quadtree data structure.
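To make the equal-size transition concrete, the sketch below finds the eastern neighbor of a block in a pointer-based quadtree using only parent/child links, following the ascend-then-descend idea described above. The Node class, the quadrant labels, and the restriction to the eastern direction are illustrative assumptions, not the paper's notation.

```python
# Eastern-neighbor finding in a pointer-based quadtree (illustrative sketch).

NW, NE, SW, SE = "NW", "NE", "SW", "SE"

# Is this quadrant adjacent to the eastern side of its parent's block?
ADJ_E = {NE: True, SE: True, NW: False, SW: False}
# Quadrant obtained by mirroring across the shared north-south edge.
REFLECT_E = {NW: NE, NE: NW, SW: SE, SE: SW}

class Node:
    def __init__(self, parent=None, quadrant=None):
        self.parent = parent
        self.quadrant = quadrant      # which child of its parent this node is
        self.children = {}            # empty dict means the node is a leaf (a block)

    def is_leaf(self):
        return not self.children

    def subdivide(self):
        for q in (NW, NE, SW, SE):
            self.children[q] = Node(self, q)
        return self.children

def east_neighbor(node):
    """Eastern neighbor of equal or larger size, or None at the image edge.
    Uses only parent/child links, with no coordinates or image size."""
    if node.parent is None:
        return None                               # the root has no neighbors
    if not ADJ_E[node.quadrant]:
        return node.parent.children[REFLECT_E[node.quadrant]]  # a sibling
    q = east_neighbor(node.parent)                # ascend toward the common ancestor
    if q is None or q.is_leaf():
        return q                                  # image edge, or a larger block
    return q.children[REFLECT_E[node.quadrant]]   # descend, mirroring the path

# Example: in a two-level tree, the NE child of the root's NW child has as its
# eastern neighbor the NW child of the root's NE child.
root = Node()
kids = root.subdivide()
nw_kids = kids[NW].subdivide()
kids[NE].subdivide()
assert east_neighbor(nw_kids[NE]) is kids[NE].children[NW]
```

A larger neighbor is returned when the mirrored descent stops early at a leaf; finding the smaller neighbors along the shared edge would continue the descent, and the other directions follow from analogous adjacency and reflection tables.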
Similar resources
Exploiting Modern Hardware for High-Dimensional Nearest Neighbor Search. (Exploitation du matériel moderne pour la recherche de plus proche voisin en haute dimensionnalité)
Many multimedia information retrieval or machine learning problems require efficient high-dimensional nearest neighbor search techniques. For instance, multimedia objects (images, music or videos) can be represented by high-dimensional feature vectors. Finding two similar multimedia objects then comes down to finding two objects that have similar feature vectors. In the current context of mass ...
LSH: Nearest neighbor search in high dimensions
Calculating all distance pairs is O(n²) in memory and time, and finding the nearest neighbor is O(n) in time. Tree indexing techniques like the kd-tree [2] were developed to cope with large n; however, their performance quickly breaks down for p > 3 [3]. Locality sensitive hashing (LSH) [3] is a technique for generating hash numbers from high dimensional data, such that nearby points have identical hashe...
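To illustrate the idea in the snippet above, here is a small Python sketch of one common LSH family, sign-of-random-projection hashing; the particular scheme, bit width, and names are assumptions for illustration rather than the construction used in the cited works.

```python
# Locality-sensitive hashing with random hyperplanes (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def make_hash(dim, n_bits=16):
    """Return a hash function mapping a vector to an n_bits-bit signature."""
    planes = rng.normal(size=(n_bits, dim))        # one random hyperplane per bit
    def h(x):
        bits = planes @ x > 0                      # which side of each hyperplane
        return bits.astype(np.uint8).tobytes()     # hashable bucket key
    return h

# Points separated by a small angle fall on the same side of most hyperplanes,
# so they usually land in the same bucket; unrelated points usually do not.
h = make_hash(dim=128)
x = rng.normal(size=128)
near = x + 0.01 * rng.normal(size=128)             # small perturbation of x
far = rng.normal(size=128)                          # unrelated point
print(h(x) == h(near))   # usually True
print(h(x) == h(far))    # usually False
```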
Hilbert Space Filling Curve (HSFC) Nearest Neighbor Classifier
The Nearest Neighbor algorithm is one of the simplest and oldest classification techniques. A given collection of historic data (Training Data) of known classification is stored in memory. Then, based on the stored knowledge, the classification of an unknown data point (Test Data) is predicted by finding the classification of its nearest neighbor. For example, if an instance from the test set is presen...
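The classification rule described above fits in a few lines; the following sketch assumes Euclidean distance and in-memory NumPy arrays, both illustrative choices.

```python
# 1-nearest-neighbor classification (illustrative sketch).
import numpy as np

def predict_1nn(train_x, train_y, query):
    """Return the label of the training point closest to `query`."""
    dists = np.linalg.norm(train_x - query, axis=1)   # Euclidean distance to each point
    return train_y[np.argmin(dists)]

# Toy example: two clusters with labels 0 and 1.
train_x = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
train_y = np.array([0, 0, 1, 1])
print(predict_1nn(train_x, train_y, np.array([0.2, 0.1])))   # -> 0
print(predict_1nn(train_x, train_y, np.array([4.8, 5.2])))   # -> 1
```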
Identification of selected monogeneans using image processing, artificial neural network and K-nearest neighbor
Abstract: Over the last two decades, improvements in computational tools have made significant contributions to classifying images of biological specimens into their corresponding species. These days, identification of biological species is much easier for taxonomists and even non-taxonomists due to the development of automated computer techniques and systems. In this study, we d...
Towards Optimal ε-Approximate Nearest Neighbor Algorithms in Constant Dimensions
The nearest neighbor (NN) problem on a set of n points P is to build a data structure that, when given a query point q, finds p ∈ P such that for all p′ ∈ P, d(p, q) ≤ d(p′, q). In low dimensions (2 or 3), this is considered a solved problem, with techniques such as Voronoi diagrams providing practical, log-height tree structures. Finding algorithms that work in arbitrary dimension has been m...
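As a concrete reading of that definition, the sketch below builds a tree structure over 2-D points and checks the answer against the stated condition d(p, q) ≤ d(p′, q); SciPy's KDTree is used here only as a convenient stand-in for the log-height structures mentioned above.

```python
# Exact nearest-neighbor query in low dimension (illustrative sketch).
import numpy as np
from scipy.spatial import KDTree

rng = np.random.default_rng(0)
P = rng.random((1000, 2))           # n points in 2-D
q = np.array([0.5, 0.5])            # query point

tree = KDTree(P)                    # build the structure once
dist, idx = tree.query(q)           # query for the single nearest point

# Check against the definition: d(p, q) <= d(p', q) for all p' in P.
assert np.isclose(dist, np.min(np.linalg.norm(P - q, axis=1)))
print(P[idx], dist)
```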
PNNU: Parallel Nearest-Neighbor Units for Learned Dictionaries
We present a novel parallel approach, parallel nearest neighbor unit (PNNU), for finding the nearest member in a learned dictionary of high-dimensional features. This is a computation fundamental to machine learning and data analytics algorithms such as sparse coding for feature extraction. PNNU achieves high performance by using three techniques: (1) PNNU employs a novel fast table look up sch...
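For context, the sketch below shows the underlying computation in plain vectorized Python: finding the dictionary column nearest to a feature vector. It is not PNNU's parallel table-lookup scheme, and the dictionary shape and all names are illustrative assumptions.

```python
# Nearest dictionary atom to a feature vector (illustrative sketch).
import numpy as np

def nearest_atom(D, x):
    """Index of the dictionary column (atom) closest to feature vector x.

    D has shape (feature_dim, n_atoms); x has shape (feature_dim,)."""
    dists = np.linalg.norm(D - x[:, None], axis=0)   # distance to every column
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
D = rng.normal(size=(256, 4096))               # a learned dictionary of 4096 atoms
x = D[:, 1234] + 0.01 * rng.normal(size=256)   # noisy copy of atom 1234
print(nearest_atom(D, x))                      # -> 1234
```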
Journal:
Volume/Issue:
Pages: -
Publication date: 1980